Gesture control for lab robots | Laboratory News

2021-12-22 06:30:12 By : Mr. Allen Jiang

Lab environments differ from factory floors. Robotics expert Stephen Guy explores how close human integration and gesture control can help staff safely operate collaborative robots in the busy, confined and often noisy space of a lab environment...

The rush to develop effective COVID-19 tests and successful vaccination programme roll-out put unprecedented pressure on laboratory staff and highlighted the need for further automation. But lab environments differ markedly from factory floors, meaning that robots need to work in confined spaces and integrate more closely with humans.

While exploring how laboratory staff can optimise their interaction with automation and instrumentation, we are investigating how gesture controls may be used with collaborative robotics to ensure a safe and confident working environment. Research suggests that voice and physical gestures have a key role to play in communicating with collaborative robots. But laboratories often have high background acoustic noise from environmental control systems and general benchtop equipment, such as centrifuges and shakers. They are also busy places where staff not only perform experiments but share knowledge and exchange ideas. Sharing this space with robots that are instructed and controlled solely by voice commands is therefore an unattractive prospect.

We are currently investigating how collaborative robots may instead be controlled by physical gestures, whilst also using a specific physical gesture vocabulary to indicate their own status to users. For example, collaborative robots may be taught to recognise ‘Halt’ and ‘Start’ gestures, and staff may learn to recognise gestures generated by the robot, such as ‘Sleep’, ‘System Standby’ and ‘Error’.
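As a rough illustration, such a two-way gesture vocabulary could be modelled as a pair of enumerations plus a lookup table. All names here (StaffGesture, RobotGesture, handle_gesture, the command strings) are hypothetical and not part of any real robot controller API:

```python
from enum import Enum, auto

class StaffGesture(Enum):
    """Gestures the robot is trained to recognise from staff."""
    HALT = auto()
    START = auto()

class RobotGesture(Enum):
    """Gestures the robot performs to indicate its own status to users."""
    SLEEP = auto()
    SYSTEM_STANDBY = auto()
    ERROR = auto()

# Hypothetical mapping from a recognised staff gesture to a controller command.
COMMANDS = {
    StaffGesture.HALT: "stop_motion",
    StaffGesture.START: "resume_task",
}

def handle_gesture(gesture: StaffGesture) -> str:
    """Translate a recognised staff gesture into a controller command string."""
    return COMMANDS[gesture]
```

Keeping the vocabulary as a small, closed set like this is what makes robust recognition tractable in a noisy, busy space.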

The recognition of physical gestures also opens up the possibility of collaborative robots communicating together in groups to streamline processes and improve efficiency.

Recent research has focussed on the use of data-gloves to determine gestures, but these are not practical in laboratories, where staff are required to be ‘hands-free’ and often wear specialised protective gloves. Instead, we have taken a multi-sensor approach to detecting physical gestures, including vision sensors.

A helpful factor when trying to monitor gestures in a laboratory environment is that the lighting is generally of high luminosity and consistent, allowing visual images of high quality. Also, the short physical distance between robots and staff means that the physical effort required by staff to generate a gesture is low and the field of view of the sensors can be well focussed.

To enable a multi-sensor array to be retrofitted to existing robotics, sensors may be configured in the form of a ‘wearable’ sleeve. This is an effective approach as it is not robot specific, allowing implementation on a wide range of robot types. It is also driving innovation in how we think about the re-design of the sensors themselves. An important consideration is that the sleeve must be cleanable, using standard ethanol/water mixtures and laboratory disinfectants, and sterilisable using UV or hydrogen peroxide.

To provide feedback to the user that the collaborative robot has responded to a gesture, lab staff could themselves be provided with a wearable device. This approach will potentially be helped by the proliferation of consumer digital wearable devices that already exist, such as smart watches.

Data from multiple sensors can be overlaid to create a map of the scene, from which the region of interest is extracted. The gesture is determined by matching patterns against those stored in a gesture database. The appropriate response for the specific gesture is then determined and instructions are sent to the robot controller.
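A minimal sketch of that overlay → region-of-interest → database-match flow, assuming pre-registered sensor frames and simple template matching (the mean-based fusion, threshold value and function names are illustrative assumptions, not the actual system):

```python
import numpy as np

def overlay_sensors(frames: list[np.ndarray]) -> np.ndarray:
    """Fuse frames from multiple sensors into one scene map.
    Here a simple element-wise mean; real fusion would use calibrated geometry."""
    return np.mean(np.stack(frames), axis=0)

def extract_roi(scene: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Extract the region of interest as a mask of high-activity pixels."""
    return (scene > threshold).astype(float)

def match_gesture(roi: np.ndarray, database: dict[str, np.ndarray]) -> str:
    """Match the region of interest against stored gesture templates,
    returning the name with the smallest mean squared difference."""
    return min(database,
               key=lambda name: float(np.mean((roi - database[name]) ** 2)))
```

In practice the matcher would be far richer than a pixel-wise difference; the sketch only shows how the stages of the pipeline hand data to one another before an instruction is sent to the robot controller.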

Generalised features characterising the gesture include size and arc, plane, speed, and abruptness of movement. The larger the feature set, the greater the filtering or elimination of irrelevant gestures, but also the longer the processing required. The richness of information contained in the gesture database is key, and research is presently considering the benefits of a ‘machine learning’ approach. It is interesting to consider whether this should be built upon data collected from a large number of random staff, or only from those staff permitted to use the system.
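One way to make the feature-set idea concrete is a nearest-neighbour match over a small feature vector, with a rejection threshold that filters out irrelevant movements. Everything here (the GestureFeatures fields, the units, the threshold value) is an illustrative assumption, not the project's actual classifier:

```python
import math
from dataclasses import dataclass

@dataclass
class GestureFeatures:
    """Generalised features characterising a gesture (illustrative, assumed
    pre-normalised to comparable scales)."""
    size: float        # spatial extent of the movement
    arc: float         # curvature of the trajectory
    plane: float       # orientation of the movement plane
    speed: float       # mean speed of the movement
    abruptness: float  # peak acceleration

def distance(a: GestureFeatures, b: GestureFeatures) -> float:
    """Euclidean distance in feature space."""
    return math.dist((a.size, a.arc, a.plane, a.speed, a.abruptness),
                     (b.size, b.arc, b.plane, b.speed, b.abruptness))

def classify(observed: GestureFeatures,
             database: dict[str, GestureFeatures],
             reject_threshold: float = 1.0):
    """Return the closest stored gesture, or None if nothing is close enough.
    The rejection step is what eliminates irrelevant gestures."""
    name, d = min(((n, distance(observed, f)) for n, f in database.items()),
                  key=lambda pair: pair[1])
    return name if d <= reject_threshold else None
```

Adding features tightens the rejection step but lengthens each comparison, which is the trade-off the text describes; a machine-learning approach would replace the hand-built distance with a learned one.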

Early adoption of collaborative robotics is well suited to laboratory applications as processes are generally well defined and structured, especially if they are covered by regulatory requirements. There is no great need to communicate subtly, as might be the case for collaborative robotics in applications where there is less formal structure such as personalised care or education. In these applications one could imagine that the range of gestures would need to be greatly expanded. It would be interesting to explore the scalability of this approach with additional peripherals such as data gloves, voice commands and augmented reality applications.

As the population grows, we will enter the era of ‘big health’ requiring complex analysis of millions of samples drawn from across the population. This will create a demand for extensive laboratory automation to remove the burden of tedious and repetitive tasks from human operators. There will be increasing pressure to bring robotics out from behind large and expensive enclosures to use laboratory space more efficiently and streamline processes, and this is where collaborative robotics can help.

Key success factors will be ensuring that the system is perceived by staff to be safe and also likeable. Perceived safety may have more to do with the response of the collaborative robot to the gesture, such as the speed, abruptness, and displacement of the motion, than with the nature of the gestures themselves. The challenges of realising collaborative robotics in the laboratory are multiple, but the potential rewards in terms of reduced costs and enhanced efficiency justify further investigation. It is hard to imagine one organisation meeting all the challenges involved, so forming meaningful strategic partnerships, including potential end-users and equipment suppliers, is key. The pandemic has shown that governments and other agencies need to be fully prepared for future eventualities, which will require investment in automation technologies to support the life science supply chain.

Author: Stephen Guy is chief technical officer at the Plextek and Design Momentum Life Science Partnership designmomentum.co.uk